Aggregating Data for Optimal and Private Learning

Agarwal, Sushant, Makhija, Yukti, Saket, Rishi, Raghuveer, Aravindan

arXiv.org Artificial Intelligence

In many applications however, due to lack of instrumentation or annotators [Chen et al., 2004, Dery et al., 2017], or privacy constraints [Rueping, 2010], instance-wise labels may not be available. Instead, the dataset is partitioned into disjoint sets or bags of instances, and for each bag only one bag-label is available to the learner. The bag-label is derived from the undisclosed instance-labels present in the bag via some aggregation function depending on the scenario. The goal is to train a model predicting the labels of individual instances. We call this paradigm learning from aggregate labels; it directly generalizes traditional supervised learning, the latter being the special case of unit-sized bags. The two formalizations of our focus are (i) multiple instance regression (MIR), where the bag-label is one of the instance-labels of the bag, and the instance whose label is chosen as the bag-label is not revealed, and (ii) learning from label proportions (LLP), in which the bag-label is the average of the bag's instance-labels. In MIR as well as in LLP, our work considers real-valued instance-labels with regression as the underlying instance-level task.
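The distinction between the two aggregation functions can be sketched on a toy dataset. This is an illustrative example, not from the paper; the labels, bag partition, and the random choice of the undisclosed MIR instance are all made up for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 6 instances with hidden real-valued labels, split into 2 bags.
instance_labels = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
bags = [np.array([0, 1, 2]), np.array([3, 4, 5])]  # index sets of each bag

# LLP: the bag-label is the average of the bag's instance-labels.
llp_bag_labels = [float(instance_labels[b].mean()) for b in bags]

# MIR: the bag-label is one instance-label from the bag; which instance
# was chosen is not revealed to the learner (here: a random pick).
mir_bag_labels = [float(instance_labels[rng.choice(b)]) for b in bags]

print(llp_bag_labels)  # [2.0, 5.0]
```

The learner only ever sees `bags` and the bag-labels, never `instance_labels`, yet is evaluated on instance-level predictions.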


Learning from Aggregate responses: Instance Level versus Bag Level Loss Functions

Javanmard, Adel, Chen, Lin, Mirrokni, Vahab, Badanidiyuru, Ashwinkumar, Fu, Gang

arXiv.org Artificial Intelligence

Due to the rise of privacy concerns, in many practical applications the training data is aggregated before being shared with the learner, in order to protect the privacy of users' sensitive responses. In an aggregate learning framework, the dataset is grouped into bags of samples, where each bag is available only with an aggregate response, providing a summary of individuals' responses in that bag. In this paper, we study two natural loss functions for learning from aggregate responses: the bag-level loss and the instance-level loss. In the former, the model is learnt by minimizing a loss between aggregate responses and aggregate model predictions, while in the latter the model aims to fit individual predictions to the aggregate responses. In this work, we show that the instance-level loss can be perceived as a regularized form of the bag-level loss. This observation lets us compare the two approaches with respect to the bias and variance of the resulting estimators, and introduce a novel interpolating estimator which combines the two approaches. For linear regression tasks, we provide a precise characterization of the risk of the interpolating estimator in an asymptotic regime where the size of the training set grows in proportion to the feature dimension. Our analysis allows us to theoretically understand the effect of different factors, such as the bag size, on the model prediction risk. In addition, we propose a mechanism for differentially private learning from aggregate responses and derive the optimal bag size in terms of the prediction risk-privacy trade-off. We also carry out thorough experiments to corroborate our theory and show the efficacy of the interpolating estimator.
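The "instance-level loss as a regularized bag-level loss" view can be checked numerically on a toy linear model. The sketch below is an assumption-laden illustration, not the paper's construction: predictions, bags, and aggregate responses are synthetic, and the bag-level loss is weighted by bag size so that the decomposition (instance-level = bag-level + within-bag prediction variance) holds exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model: predictions on 6 instances grouped into 2 bags,
# with only the aggregate (average) response available per bag.
X = rng.normal(size=(6, 3))
theta = rng.normal(size=3)
preds = X @ theta
bags = [np.array([0, 1, 2]), np.array([3, 4, 5])]
bag_labels = np.array([0.5, -1.0])  # synthetic aggregate responses

# Bag-level loss: compare the *average* prediction per bag to the bag
# label (weighted by bag size; this weighting is our assumption).
bag_loss = sum(len(b) * (preds[b].mean() - y) ** 2
               for b, y in zip(bags, bag_labels))

# Instance-level loss: fit each *individual* prediction to its bag label.
inst_loss = sum(((preds[b] - y) ** 2).sum()
                for b, y in zip(bags, bag_labels))

# The gap between the two is the within-bag variance of predictions,
# which acts as an implicit regularizer shrinking predictions within a
# bag toward each other.
within_var = sum(((preds[b] - preds[b].mean()) ** 2).sum() for b in bags)
print(np.isclose(inst_loss, bag_loss + within_var))  # True
```

The identity follows from the usual bias-variance split of a sum of squares around the bag mean; the variance term is what makes the instance-level objective behave like a regularized version of the bag-level one.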